Dating apps used in Mexico to lure and kidnap U.S. citizens, officials warn

Los Angeles Times

U.S. citizens who visit Mexico are being warned that they may be at risk of being kidnapped by people who lure them in through dating apps, according to federal officials. The U.S. Consulate General Guadalajara warned that victims of such schemes were kidnapped in the Puerto Vallarta and Nuevo Nayarit areas in recent months, according to a news release. The consulate did not say how often this type of crime has occurred or whether any suspects have been arrested. Victims and their family members were extorted for large amounts of money in order to secure their release, officials said. Some of the victims met their captors in residences or hotel rooms.


Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification

Abzaliev, Artem, Espinosa, Humberto Pérez, Mihalcea, Rada

arXiv.org Artificial Intelligence

Similar to humans, animals make extensive use of verbal and non-verbal forms of communication, including a large range of audio signals. In this paper, we address dog vocalizations and explore the use of self-supervised speech representation models pre-trained on human speech to address dog bark classification tasks that find parallels in human-centered tasks in speech recognition. We specifically address four tasks: dog recognition, breed identification, gender classification, and context grounding. We show that using speech embedding representations significantly improves over simpler classification baselines. Further, we also find that models pre-trained on large human speech acoustics can provide additional performance boosts on several tasks.
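The embedding-then-classify recipe the abstract describes can be sketched in miniature as follows. The embedding vectors and the nearest-centroid classifier below are hypothetical stand-ins for illustration only, not the authors' actual features or models:

```python
# A minimal sketch: fixed-length "speech embeddings" for barks
# (e.g. mean-pooled features from a pretrained speech model),
# fed to a simple nearest-centroid breed classifier.
from statistics import mean

def centroid(vectors):
    # Dimension-wise mean of a list of equal-length vectors.
    return [mean(dim) for dim in zip(*vectors)]

def nearest_centroid(x, centroids):
    # Predict the label whose centroid is closest in squared Euclidean distance.
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical 2-D embeddings standing in for real speech representations.
train = {
    "chihuahua": [[0.9, 0.1], [0.8, 0.2]],
    "schnauzer": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {breed: centroid(vecs) for breed, vecs in train.items()}
print(nearest_centroid([0.85, 0.15], centroids))  # → chihuahua
```

In the paper's setting, the interesting comparison is between such simple classifiers on raw acoustic features and the same classifiers on embeddings from models pre-trained on human speech.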


Analysis of Systems' Performance in Natural Language Processing Competitions

Nava-Muñoz, Sergio, Graff, Mario, Escalante, Hugo Jair

arXiv.org Artificial Intelligence

Collaborative competitions have gained popularity in the scientific and technological fields. These competitions involve defining tasks, selecting evaluation scores, and devising result verification methods. In the standard scenario, participants receive a training set and are expected to provide a solution for a held-out dataset kept by organizers. An essential challenge for organizers arises when comparing algorithms' performance, assessing multiple participants, and ranking them. Statistical tools are often used for this purpose; however, traditional statistical methods often fail to capture decisive differences between systems' performance. This manuscript describes a methodology for statistically analyzing competition results and the competitions themselves. The methodology is designed to be universally applicable; however, it is illustrated using eight natural language competitions as case studies involving classification and regression problems. The proposed methodology offers several advantages, including off-the-shelf comparisons with correction mechanisms and the inclusion of confidence intervals. Furthermore, we introduce metrics that allow organizers to assess the difficulty of competitions. Our analysis shows the potential usefulness of our methodology for effectively evaluating competition results.
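As one concrete instance of the confidence-interval comparisons the abstract mentions, a percentile bootstrap over per-example scores can be sketched as follows. The scores, resample count, and threshold are hypothetical illustrations; the paper's actual statistical machinery may differ:

```python
import random

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a system's mean score."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-example scores for two competition entries.
system_a = [0.8, 0.9, 0.7, 0.85, 0.95, 0.75, 0.9, 0.8]
system_b = [0.6, 0.7, 0.65, 0.55, 0.7, 0.6, 0.75, 0.65]
ci_a = bootstrap_ci(system_a)
ci_b = bootstrap_ci(system_b)
# Non-overlapping intervals would suggest a decisive performance difference,
# whereas heavily overlapping intervals would caution against a strict ranking.
```

With more than two participants, the pairwise comparisons would additionally need a multiple-testing correction, which is the kind of mechanism the abstract alludes to.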


Ethical Considerations for Machine Translation of Indigenous Languages: Giving a Voice to the Speakers

Mager, Manuel, Mager, Elisabeth, Kann, Katharina, Vu, Ngoc Thang

arXiv.org Artificial Intelligence

In recent years machine translation has become very successful for high-resource language pairs. This has also sparked new interest in research on the automatic translation of low-resource languages, including Indigenous languages. However, the latter are deeply tied to the ethnic and cultural groups that speak (or used to speak) them. Collecting data for, modeling, and deploying machine translation systems thus raise new ethical questions that must be addressed. Motivated by this, we first survey the existing literature on ethical considerations for the documentation, translation, and general natural language processing of Indigenous languages. Afterward, we conduct and analyze an interview study to shed light on the positions of community leaders, teachers, and language activists regarding ethical concerns for the automatic translation of their languages. Our results show that the inclusion, at different degrees, of native speakers and community members is vital to performing better and more ethical research on Indigenous languages.


Context Generation Improves Open Domain Question Answering

Su, Dan, Patwary, Mostofa, Prabhumoye, Shrimai, Xu, Peng, Prenger, Ryan, Shoeybi, Mohammad, Fung, Pascale, Anandkumar, Anima, Catanzaro, Bryan

arXiv.org Artificial Intelligence

Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge. However, they do not fully exploit the parameterized knowledge. To address this issue, we propose a two-stage, closed-book QA framework which employs a coarse-to-fine approach to extract relevant knowledge and answer a question. Our approach first generates a related context for a given question by prompting a pretrained LM. We then prompt the same LM for answer prediction using the generated context and the question. Additionally, to eliminate failure caused by context uncertainty, we marginalize over generated contexts. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods (e.g. exact matching 68.6% vs. 55.3%), and is on par with open-book methods that exploit external knowledge sources (e.g. 68.6% vs. 68.0%). Our method is able to better exploit the stored knowledge in pretrained LMs without adding extra learnable parameters or needing finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge.
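The marginalization step the abstract describes can be sketched as follows. The `generate_contexts` and `answer_candidates` functions are hypothetical stand-ins for the two LM prompting stages, with hard-coded outputs so the arithmetic is visible:

```python
# A minimal sketch of marginalizing over generated contexts:
# p(a|q) ≈ Σ_c p(c|q) · p(a|c,q), with a uniform weight over sampled contexts.
from collections import defaultdict

def generate_contexts(question, k=3):
    # Stand-in for stage 1: sampling k contexts from a pretrained LM.
    return [f"context {i} for: {question}" for i in range(k)]

def answer_candidates(question, context):
    # Stand-in for stage 2: LM answer prediction given (context, question),
    # returning candidate answers with probabilities. One context disagrees.
    if "context 0" in context:
        return {"Paris": 0.6, "Lyon": 0.4}
    return {"Paris": 0.8, "Marseille": 0.2}

def marginalized_answer(question, k=3):
    contexts = generate_contexts(question, k)
    prior = 1.0 / len(contexts)            # uniform p(c|q) over samples
    totals = defaultdict(float)
    for ctx in contexts:
        for ans, p in answer_candidates(question, ctx).items():
            totals[ans] += prior * p
    return max(totals, key=totals.get)

print(marginalized_answer("What is the capital of France?"))  # → Paris
```

The point of the sum is robustness: an answer supported across several sampled contexts beats one that depends on a single, possibly unreliable generation.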


Generative Long-form Question Answering: Relevance, Faithfulness and Succinctness

Su, Dan

arXiv.org Artificial Intelligence

In this thesis, we investigated the relevance, faithfulness, and succinctness aspects of Long Form Question Answering (LFQA). LFQA aims to generate an in-depth, paragraph-length answer for a given question, to help bridge the gap between real scenarios and the existing open-domain QA models which can only extract short-span answers. LFQA is quite challenging and under-explored. Little work has been done to build an effective LFQA system. It is even more challenging to generate a good-quality long-form answer that is relevant to the query and faithful to facts, since a considerable amount of redundant, complementary, or contradictory information will be contained in the retrieved documents. Moreover, no prior work has investigated generating succinct answers. We are among the first to research the LFQA task. We pioneered the research direction of improving answer quality in terms of 1) query relevance, 2) answer faithfulness, and 3) answer succinctness.


ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning

Qin, Yujia, Lin, Yankai, Takanobu, Ryuichi, Liu, Zhiyuan, Li, Peng, Ji, Heng, Huang, Minlie, Sun, Maosong, Zhou, Jie

arXiv.org Artificial Intelligence

Pre-trained Language Models (PLMs) have shown strong performance in various downstream Natural Language Processing (NLP) tasks. However, PLMs still cannot fully capture the factual knowledge in text, which is crucial for understanding the whole text, especially for document-level language understanding tasks. To address this issue, we propose a novel contrastive learning framework named ERICA for the pre-training phase, to obtain a deeper understanding of the entities and their relations in text. Specifically, (1) to better understand entities, we propose an entity discrimination task that distinguishes which tail entity can be inferred from a given head entity and relation; (2) to better understand relations, we employ a relation discrimination task that distinguishes whether two entity pairs are close or not in relational semantics. Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks, including relation extraction and reading comprehension, especially under low-resource settings. Meanwhile, ERICA achieves comparable or better performance on sentence-level tasks. We will release the datasets, source code, and pre-trained language models for further research exploration.
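The discrimination tasks the abstract describes are instances of contrastive learning, whose core can be sketched with an InfoNCE-style loss. The embeddings below are hypothetical toy vectors, not ERICA's actual entity representations:

```python
import math

def info_nce(anchor, positive, negatives, temp=0.1):
    """InfoNCE-style contrastive loss: pull the positive pair together,
    push negatives away, using temperature-scaled cosine similarity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    logits = [cos(anchor, positive) / temp]
    logits += [cos(anchor, n) / temp for n in negatives]
    log_z = math.log(sum(math.exp(l) for l in logits))
    return -(logits[0] - log_z)  # cross-entropy with the positive as the label

# Toy setup mirroring entity discrimination: the anchor stands for a
# (head entity, relation) representation, the positive is the true tail
# entity, and other entities in the document serve as negatives.
head_rel = [1.0, 0.0]
true_tail = [0.9, 0.1]
others = [[-1.0, 0.2], [0.0, 1.0]]
loss = info_nce(head_rel, true_tail, others)
# A small loss here means the true tail is already well separated
# from the negatives; training drives the loss toward zero.
```

The relation discrimination task follows the same pattern, with entity-pair representations playing the roles of anchor, positive, and negatives.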


Low-pitched, rumbling rocks could help predict when earthquakes strike, research says

The Japan Times

TEPIC, MEXICO – Rocks under increasing pressure before earthquakes strike send out low-pitched rumbling sounds that the human ear cannot detect but could be used to predict when a tremor will strike, scientists said Monday. Researchers recreated powerful earthquake forces in a laboratory and used high-tech algorithms to pick out the acoustic clues amid all the other noise of a pending quake, according to findings published in Geophysical Research Letters, a journal published by the American Geophysical Union. The sounds are emitted typically a week before an earthquake occurs, so deciphering them would allow scientists to pinpoint the timing of a tremor, the research paper said. Scientists currently can calculate the probability of an earthquake in a particular area but not when it will happen, according to the U.S. Geological Survey. "People have said you can't predict earthquakes. We're now saying we believe for the first time we can predict an earthquake in a laboratory," said Colin Humphreys, professor of materials science at Cambridge University and one of the paper's authors.